Robust boosting for regression problems

Authors

Abstract

Gradient boosting algorithms construct a regression predictor using a linear combination of “base learners”. Boosting also offers an approach to obtaining robust non-parametric regression estimators that are scalable to applications with many explanatory variables. The algorithm is based on a two-stage approach, similar to what is done for robust linear regression: it first minimizes a robust residual scale estimator, and then improves it by optimizing a bounded loss function. Unlike previous robust boosting proposals, this approach does not require computing an ad hoc residual scale estimator in each boosting iteration. Since the loss functions involved are typically non-convex, a reliable initialization step is required, such as an L1 regression tree, which is also fast to compute. A variable importance measure can be calculated via a permutation procedure. Thorough simulation studies and several data analyses show that, when no atypical observations are present, the robust boosting approach works as well as standard gradient boosting with a squared loss. Furthermore, when the data contain outliers, the robust boosting estimator outperforms the alternatives in terms of prediction error and variable selection accuracy.
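
The two-stage idea described in the abstract can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation: an L1 regression tree gives a robust initial fit, a MAD residual scale is computed once (the paper instead first minimizes a robust scale estimator), and later trees are fit to pseudo-residuals from the bounded Tukey bisquare loss. Function names and tuning constants (`tukey_psi`, `c=4.685`, tree depths) are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def tukey_psi(r, c=4.685):
    """Derivative of Tukey's bisquare loss: bounded, so large residuals
    (outliers) contribute little to the gradient."""
    out = np.zeros_like(r)
    inside = np.abs(r) <= c
    out[inside] = r[inside] * (1.0 - (r[inside] / c) ** 2) ** 2
    return out

def robust_boost(X, y, n_stages=200, lr=0.1):
    # Initialization with an L1 (absolute-error) regression tree, which
    # targets conditional medians and is fast to compute.
    init = DecisionTreeRegressor(criterion="absolute_error", max_depth=3).fit(X, y)
    f = init.predict(X)
    # Robust residual scale (MAD), computed once here for simplicity.
    r0 = y - f
    s = 1.4826 * np.median(np.abs(r0 - np.median(r0)))
    trees = []
    for _ in range(n_stages):
        # Pseudo-residuals: negative gradient of the bounded loss.
        g = tukey_psi((y - f) / s)
        t = DecisionTreeRegressor(max_depth=2).fit(X, g)
        f += lr * t.predict(X)
        trees.append(t)
    return init, trees, lr

def rb_predict(init, trees, lr, X):
    f = init.predict(X)
    for t in trees:
        f += lr * t.predict(X)
    return f
```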


Similar articles

Boosting methodology for regression problems

Classification problems have dominated research on boosting to date. The application of boosting to regression problems, on the other hand, has received little investigation. In this paper we develop a new boosting method for regression problems. We cast the regression problem as a classification problem and apply an interpretable form of the boosted naïve Bayes classifier. This induces a regre...
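
As a hedged sketch of the general recipe described here (discretize the response, classify, then map class probabilities back to a numeric prediction), the toy below uses a plain Gaussian naive Bayes in place of the paper's boosted, interpretable variant; the bin count and all names are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.normal(size=500)

# Discretize y into 10 quantile bins; each bin becomes a class label.
edges = np.quantile(y, np.linspace(0, 1, 11)[1:-1])
labels = np.digitize(y, edges)
mids = np.array([y[labels == k].mean() for k in range(10)])

clf = GaussianNB().fit(X, labels)
# Map class probabilities back to a numeric prediction as the
# probability-weighted average of bin means.
y_hat = clf.predict_proba(X) @ mids
print(np.corrcoef(y, y_hat)[0, 1])
```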


Robust Regression by Boosting the Median

Most boosting regression algorithms use the weighted average of base regressors as their final regressor. In this paper we analyze the choice of the weighted median. We propose a general boosting algorithm based on this approach. We prove boosting-type convergence of the algorithm and give clear conditions for the convergence of the robust training error. The algorithm recovers ADABOOST and ADA...
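
A small illustration of why the weighted median is a robust aggregator: one wildly wrong base regressor barely moves it, while it pulls the weighted average far off. The helper below is a generic weighted median, not code from the paper.

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half the total weight."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]

# Predictions of five base regressors at one point, with their weights;
# the fourth regressor is grossly wrong.
preds = np.array([2.1, 2.3, 2.2, 50.0, 2.0])
wts = np.array([0.25, 0.25, 0.2, 0.15, 0.15])
print(weighted_median(preds, wts))     # 2.2 -- unaffected by the outlier
print(np.average(preds, weights=wts))  # ~9.3 -- pulled toward 50
```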


Local Boosting of Decision Stumps for Regression and Classification Problems

Numerous data mining problems involve an investigation of associations between features in heterogeneous datasets, where different prediction models can be more suitable for different regions. We propose a technique of boosting localized weak learners; rather than having constant weights attached to each learner (as in standard boosting approaches), we allow weights to be functions over the inp...
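
A toy sketch of the core idea, under assumed details rather than the paper's construction: learner weights w_m(x) are Gaussian kernels over the input space, so each locally fitted stump dominates predictions in its own region instead of carrying a constant weight everywhere.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(400, 1))
y = np.where(X[:, 0] < 0, np.sin(3 * X[:, 0]), 0.5 * X[:, 0]) + 0.1 * rng.normal(size=400)

# Two shallow trees, each fit on a different region of the input space.
centres = np.array([-1.5, 1.5])
stumps = [
    DecisionTreeRegressor(max_depth=2).fit(X[np.abs(X[:, 0] - c) < 1.5],
                                           y[np.abs(X[:, 0] - c) < 1.5])
    for c in centres
]

def local_predict(Xq, bandwidth=1.0):
    preds = np.column_stack([s.predict(Xq) for s in stumps])
    # w_m(x): kernel weights that vary over the input space, in contrast
    # to the constant learner weights of standard boosting.
    w = np.exp(-0.5 * ((Xq[:, [0]] - centres) / bandwidth) ** 2)
    return (w * preds).sum(axis=1) / w.sum(axis=1)

print(local_predict(np.array([[-2.0], [0.0], [2.0]])))
```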


A Gradient-Based Boosting Algorithm for Regression Problems

In adaptive boosting, several weak learners trained sequentially are combined to boost the overall algorithm performance. Recently, adaptive boosting methods for classification problems have been derived as gradient descent algorithms. This formulation justifies key elements and parameters in the methods, all chosen to optimize a single common objective function. We propose an analogous formulation...
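
The gradient-descent view of boosting that this snippet refers to is easy to write down for squared loss: each stage fits a weak learner to the negative gradient of the loss and takes a small step in that direction. This is a generic sketch of the formulation, not the paper's specific algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_stages=100, lr=0.1):
    """Boosting as gradient descent on a single objective function."""
    f0 = float(np.mean(y))          # constant initial model
    f = np.full(len(y), f0)
    trees = []
    for _ in range(n_stages):
        g = y - f                   # negative gradient of 0.5*(y - f)^2
        t = DecisionTreeRegressor(max_depth=2).fit(X, g)
        f += lr * t.predict(X)      # functional gradient step
        trees.append(t)
    return f0, trees

def gb_predict(f0, trees, X, lr=0.1):
    f = np.full(X.shape[0], f0)
    for t in trees:
        f += lr * t.predict(X)
    return f
```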


Combining Bagging, Boosting and Random Subspace Ensembles for Regression Problems

Bagging, boosting and random subspace methods are well-known resampling ensemble methods that generate and combine a diversity of learners using the same learning algorithm for the base regressor. In this work, we build an ensemble of bagging, boosting and random subspace ensembles, with 8 sub-regressors in each one, and then use an averaging methodology for the final prediction. We ...
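
A hedged sketch of such a combined ensemble using scikit-learn: a bagging ensemble, a boosting ensemble, and a random-subspace ensemble (bagging over feature subsets), each with 8 sub-regressors as in the snippet, averaged for the final prediction. The base learner and its depth are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor, AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

def combined_predict(X_train, y_train, X_test):
    base = DecisionTreeRegressor(max_depth=4)
    # Bagging: bootstrap samples, all features.
    bag = BaggingRegressor(base, n_estimators=8).fit(X_train, y_train)
    # Boosting: sequentially reweighted sub-regressors.
    boost = AdaBoostRegressor(base, n_estimators=8).fit(X_train, y_train)
    # Random subspace: all samples, random half of the features per learner.
    subspace = BaggingRegressor(base, n_estimators=8, bootstrap=False,
                                max_features=0.5).fit(X_train, y_train)
    preds = [m.predict(X_test) for m in (bag, boost, subspace)]
    return np.mean(preds, axis=0)   # average of the three ensembles
```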



Journal

Journal title: Computational Statistics & Data Analysis

Year: 2021

ISSN: 0167-9473, 1872-7352

DOI: https://doi.org/10.1016/j.csda.2020.107065